310 research outputs found

    Quantified Degrees of Group Responsibility (Extended Abstract)

    This paper builds on an existing notion of group responsibility and proposes two ways to define the degree of group responsibility: structural and functional degrees of responsibility. These notions measure the potential responsibility of (agent) groups for avoiding a state of affairs. According to these notions, a degree of responsibility for a state of affairs can be assigned to a group of agents if, and to the extent that, the group has the potential to preclude that state of affairs. Comment: Presented at the 27th Belgian-Netherlands Conference on Artificial Intelligence (BNAIC 2015), Hasselt, Belgium.
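
    A minimal sketch of how such a degree could be computed, assuming a set-based reading: a coalition's power to preclude the state of affairs is given by an oracle (the hard-coded PRECLUDING set below stands in for model checking), and the 1/k scoring over the smallest sufficient sub-coalition is an illustrative choice, not the paper's exact definition.

    ```python
    from itertools import combinations

    # Toy stand-in for model checking: coalitions assumed to have a joint
    # strategy that precludes the unwanted state of affairs.
    PRECLUDING = {frozenset({"a", "b"}), frozenset({"c"})}

    def can_preclude(coalition):
        # A coalition can preclude the state if it includes some coalition
        # known to have a precluding joint strategy.
        return any(w <= coalition for w in PRECLUDING)

    def structural_degree(group):
        """Illustrative degree: higher when a smaller sub-coalition of
        `group` suffices to preclude the state; 0 if none does."""
        for k in range(1, len(group) + 1):
            if any(can_preclude(frozenset(c)) for c in combinations(group, k)):
                return 1.0 / k
        return 0.0

    print(structural_degree(frozenset({"a", "b", "d"})))  # 0.5: {a, b} suffices
    print(structural_degree(frozenset({"d"})))            # 0.0: cannot preclude
    ```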

    Verifying existence of resource-bounded coalition uniform strategies

    We consider the problem of whether a coalition of agents has a knowledge-based strategy to ensure some outcome under a resource bound. We extend previous work on verification of multi-agent systems in which agents' actions produce and consume resources, by adding epistemic pre- and postconditions to actions. This allows us to model scenarios where agents perform both actions that change the world and actions that change their knowledge about the world, such as observation and communication. To avoid logical omniscience and obtain a compact model of the system, our model of agents' knowledge is syntactic. We define classes of coalition-uniform strategies with respect to any (decidable) notion of coalition knowledge, and show that the model-checking problem for the resulting logic is decidable for each such class of coalition-uniform strategies.
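
    A sketch of the syntactic knowledge model under stated assumptions: an agent's knowledge is literally a set of formulas, so nothing is derived automatically (avoiding logical omniscience); actions carry a cost plus epistemic pre- and postconditions; and a single numeric resource stands in for resource vectors. All names below are illustrative.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Agent:
        name: str
        knowledge: set = field(default_factory=set)  # set of formula strings
        resources: int = 0  # single resource type, a simplifying assumption

    @dataclass
    class Action:
        name: str
        cost: int
        pre: set   # formulas the agent must already know (epistemic precondition)
        post: set  # formulas added to the agent's knowledge (epistemic postcondition)

    def apply(agent, act):
        """Execute `act` if the agent can afford it and its epistemic
        precondition holds; returns True on success."""
        if agent.resources < act.cost or not act.pre <= agent.knowledge:
            return False
        agent.resources -= act.cost
        agent.knowledge |= act.post
        return True

    # Observation as a knowledge-changing action: costs 1, needs no prior
    # knowledge, and adds the observed fact to the agent's knowledge set.
    alice = Agent("alice", knowledge=set(), resources=2)
    observe_p = Action("observe_p", cost=1, pre=set(), post={"p"})
    print(apply(alice, observe_p), alice.knowledge)  # True {'p'}
    ```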

    Norm approximation for imperfect monitors

    In this paper, we consider the runtime monitoring of norms with imperfect monitors. A monitor is imperfect for a norm if it has insufficient observational capabilities to determine whether a given execution trace of a multi-agent system complies with or violates the norm. One approach to the problem of imperfect monitors is to enhance the observational capabilities of the normative organisation; however, this may be costly or in some cases impossible. Instead, we show how to synthesise an approximation of an 'ideal' norm that can be perfectly monitored by a given monitor, and which is optimal in the sense that no other perfectly monitorable approximation detects more violations of the ideal norm. We give a logical analysis of (im)perfect monitors, state the computational complexity of the norm approximation problem, and give an optimal algorithm for generating optimal approximations of norms for a given monitor.
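
    The idea of an optimal approximation can be illustrated in a toy finite setting, assuming norms are given extensionally as sets of violating traces and the imperfect monitor is an observation function: the approximation flags exactly those observation classes whose members all violate the ideal norm, so a compliant trace is never wrongly sanctioned. The traces, norm and observation function below are invented.

    ```python
    from collections import defaultdict

    # Invented toy setting: finite traces, an ideal norm given extensionally
    # by its violating traces, and a monitor that only sees the last event.
    TRACES = ["aa", "ab", "ba", "bb"]
    IDEAL_VIOLATIONS = {"ab", "bb", "ba"}
    obs = lambda trace: trace[-1]

    def optimal_approximation(traces, violations, obs):
        """Flag exactly the observation classes all of whose traces violate
        the ideal norm: the most violations detectable without ever
        sanctioning a compliant trace."""
        classes = defaultdict(set)
        for t in traces:
            classes[obs(t)].add(t)
        flagged = set()
        for cls in classes.values():
            if cls <= violations:  # all indistinguishable traces violate
                flagged |= cls
        return flagged

    # {'ab', 'bb'}: 'ba' violates too, but is indistinguishable from the
    # compliant 'aa', so no perfectly monitorable approximation flags it.
    print(optimal_approximation(TRACES, IDEAL_VIOLATIONS, obs))
    ```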

    Providing personalized Explanations: a Conversational Approach

    The increasing application of AI systems requires that their behaviors be explained to various stakeholders in a personalized way, since stakeholders may differ in knowledge and background. In general, a conversation between an explainer and an explainee not only allows the explainer to learn the explainee's background, but also allows the explainee to better understand the explanations. In this paper, we propose an approach by which an explainer communicates personalized explanations to an explainee through consecutive conversations. We prove that the conversation terminates with the explainee justifying the initial claim, provided there exists an explanation for the initial claim that the explainee understands and the explainer is aware of.
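
    A toy sketch of such a conversation, under strong assumptions: explanations are rules reducing a claim to premises, the explainee understands a claim once all premises of some rule the explainer is aware of are justified, and the rule base is acyclic. RULES and UNDERSTOOD are invented for illustration.

    ```python
    # Invented knowledge bases: the explainer explains a claim by a rule
    # reducing it to premises; the explainee understands some atoms outright.
    RULES = {"claim": ["p", "q"], "p": ["r"], "q": []}  # q needs no premises
    UNDERSTOOD = {"r"}  # the explainee's background knowledge

    def converse(goal, understood):
        """One thread of the dialogue: returns True iff it ends with the
        explainee justifying `goal`. Assumes the rule base is acyclic."""
        if goal in understood:
            return True           # the explainee already accepts the claim
        if goal not in RULES:
            return False          # the explainer is not aware of an explanation
        if all(converse(premise, understood) for premise in RULES[goal]):
            understood.add(goal)  # the explainee now understands `goal`
            return True
        return False

    print(converse("claim", set(UNDERSTOOD)))  # True: the dialogue terminates
    ```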

    Practical run-time norm enforcement with bounded lookahead

    Norms have been widely proposed as a means of coordinating and controlling the behaviour of agents in a multi-agent system. A key challenge in normative MAS is norm enforcement: how and when should the agents' behaviour be restricted in order to obtain a desirable outcome? Even if a norm can be enforced in theory, it may not be enforceable in a grounded, practical setting. In this paper we study the problem of practical norm enforcement. The key notion is that of a guard: a function that restricts the actions available after a given history of events. We propose a formal, computational model of norms, guards and norm enforcement, based on linear-time temporal logic with past operators. We show that not all norms can be enforced by such guard functions, even with unlimited computational power to reason about future events. We then analyse which norms can be enforced by guards when only a fixed lookahead is available, and investigate the corresponding decision problems for specific classes of norms related to safety and liveness properties.
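
    A small illustration of a bounded-lookahead guard, under assumptions: the norm is a simple safety property checked directly on finite traces rather than expressed in past-LTL, and the guard permits an action only if taking it keeps some compliant continuation within the lookahead reachable.

    ```python
    from itertools import product

    ACTIONS = ("a", "b")

    def violates(trace):
        # Assumed safety norm for illustration: never two consecutive 'b's.
        return any(x == y == "b" for x, y in zip(trace, trace[1:]))

    def guard(history, lookahead=2):
        """Permit an action iff taking it does not violate the norm and some
        continuation of at most `lookahead` further events stays compliant.
        For this simple safety norm the immediate check already suffices;
        lookahead matters for norms where a prefix can make a later
        violation inevitable before one actually occurs."""
        allowed = []
        for act in ACTIONS:
            ext = history + (act,)
            compliant_future = any(
                not violates(ext + future)
                for k in range(lookahead + 1)
                for future in product(ACTIONS, repeat=k)
            )
            if not violates(ext) and compliant_future:
                allowed.append(act)
        return allowed

    print(guard(("a", "b")))  # ['a']: appending 'b' would violate the norm
    ```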

    What Do You Care About: Inferring Values from Emotions

    Observers can glean information about what another individual cares about by drawing inferences from that individual's emotional expressions. It is important for socially aware artificial systems to be capable of such inferences, as they can facilitate social interaction among agents and are particularly valuable in human-robot interaction for supporting more personalized treatment of users. In this short paper, we propose a methodology for developing a formal model that allows agents to infer another agent's values from her emotional expressions.
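
    A toy sketch of the kind of inference involved, under strong assumptions: events are annotated with which (hypothetical) values they promote or demote, and a positive emotional expression about an event is read as evidence for a promoted value, a negative one for a demoted value. All event and value names are invented.

    ```python
    # Invented event/value annotations: +1 promotes the value, -1 demotes it.
    EFFECTS = {
        "donation": {"generosity": +1, "wealth": -1},
        "tax_cut": {"wealth": +1},
    }

    def candidate_values(event, emotion):
        """Values consistent with the expression: a positive emotion about
        an event suggests a promoted value, a negative one a demoted one."""
        sign = +1 if emotion == "joy" else -1
        return {value for value, s in EFFECTS[event].items() if s == sign}

    print(candidate_values("donation", "joy"))      # {'generosity'}
    print(candidate_values("donation", "distress")) # {'wealth'}
    ```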